boolean expression
TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA
Large Language Models (LLMs) are widely applied in real-world scenarios, but fine-tuning them comes with significant computational and storage costs. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA mitigate these costs, but the adapted parameters depend on the base model and cannot be transferred across different backbones. One way to address this issue is knowledge distillation, but its effectiveness inherently depends on training data. Recent work such as TransLoRA avoids this by generating synthetic data, but adds complexity because it requires training an additional discriminator model. In this paper, we propose TiTok, a new framework that enables effective LoRA Transplantation through Token-level knowledge transfer. Specifically, TiTok captures task-relevant information through a contrastive excess between a source model with and without LoRA. This excess highlights informative tokens and enables selective filtering of synthetic data, all without additional models or overhead. In experiments on three benchmarks across multiple transfer settings, the proposed method is consistently effective, achieving average performance gains of +4~8% over baselines.
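The contrastive-excess idea above can be illustrated with a toy sketch (the interface and threshold are assumptions for illustration, not TiTok's actual implementation): score each token of a synthetic sample by how much the LoRA adapter raises its log-probability over the plain source model, and keep the tokens with large positive excess.

```python
# Toy sketch of contrastive-excess token scoring (hypothetical interface):
# compare per-token log-probabilities from the source model with LoRA
# against the source model without LoRA; a large positive gap marks the
# token as task-informative.

def contrastive_excess(logprobs_with_lora, logprobs_base):
    """Per-token excess: how much the LoRA adapter raises each token's score."""
    return [lw - lb for lw, lb in zip(logprobs_with_lora, logprobs_base)]

def filter_informative(tokens, logprobs_with_lora, logprobs_base, threshold=0.5):
    """Keep tokens whose excess exceeds a threshold (illustrative criterion)."""
    excess = contrastive_excess(logprobs_with_lora, logprobs_base)
    return [tok for tok, e in zip(tokens, excess) if e > threshold]

tokens = ["The", "answer", "is", "42"]
lp_lora = [-1.0, -0.4, -0.9, -0.2]   # toy log-probs from source + LoRA
lp_base = [-1.1, -1.6, -1.0, -2.0]   # toy log-probs from source alone
print(filter_informative(tokens, lp_lora, lp_base))  # → ['answer', '42']
```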
LogicLearner: A Tool for the Guided Practice of Propositional Logic Proofs
Inamdar, Amogh, Macar, Uzay, Vazirani, Michel, Tarnow, Michael, Mustapha, Zarina, Dittren, Natalia, Sadeh, Sam, Verma, Nakul, Salleb-Aouissi, Ansaf
The study of propositional logic -- fundamental to the theory of computing -- is a cornerstone of the undergraduate computer science curriculum. Learning to solve logical proofs requires repeated guided practice, but undergraduate students often lack access to on-demand tutoring in a judgment-free environment. In this work, we highlight the need for guided practice tools in undergraduate mathematics education and outline the desiderata of an effective practice tool. We accordingly develop LogicLearner, a web application for guided logic proof practice. LogicLearner consists of an interface to attempt logic proofs step-by-step and an automated proof solver to generate solutions on the fly, allowing users to request guidance as needed. We pilot LogicLearner as a practice tool in two semesters of an undergraduate discrete mathematics course and receive strongly positive feedback for usability and pedagogical value in student surveys. To the best of our knowledge, LogicLearner is the only learning tool that provides an end-to-end practice environment for logic proofs with immediate, judgment-free feedback.
- Research Report (1.00)
- Questionnaire & Opinion Survey (1.00)
- Instructional Material > Course Syllabus & Notes (0.46)
- Education > Curriculum > Subject-Specific Education (1.00)
- Education > Educational Setting > Higher Education (0.66)
- Education > Educational Technology > Educational Software > Computer Based Training (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Logic & Formal Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
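A minimal way to validate a single rewrite step in a propositional proof, of the kind a tool like LogicLearner must check, is a truth-table equivalence test (this is an illustration, not LogicLearner's actual solver):

```python
from itertools import product

# Truth-table equivalence check: two formulas (Python functions over bools)
# are equivalent iff they agree on every assignment of their variables.

def equivalent(f, g, num_vars):
    """True iff f and g agree on all 2**num_vars assignments."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=num_vars))

# Proof step: rewrite  not (p and q)  to  (not p) or (not q)  (De Morgan)
before = lambda p, q: not (p and q)
after = lambda p, q: (not p) or (not q)
print(equivalent(before, after, 2))  # → True
```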
Compositional Instruction Following with Language Models and Reinforcement Learning
Cohen, Vanya, Tasse, Geraud Nangue, Gopalan, Nakul, James, Steven, Gombolay, Matthew, Mooney, Ray, Rosman, Benjamin
Combining reinforcement learning with language grounding is challenging as the agent needs to explore the environment while simultaneously learning multiple language-conditioned tasks. To address this, we introduce a novel method: the compositionally-enabled reinforcement learning language agent (CERLLA). Our method reduces the sample complexity of tasks specified with language by leveraging compositional policy representations and a semantic parser trained using reinforcement learning and in-context learning. We evaluate our approach in an environment requiring function approximation and demonstrate compositional generalization to novel tasks. Our method significantly outperforms the previous best non-compositional baseline in terms of sample complexity on 162 tasks designed to test compositional generalization. Our model attains a higher success rate and learns in fewer steps than the non-compositional baseline. It reaches a success rate equal to an oracle policy's upper-bound performance of 92%. With the same number of environment steps, the baseline only reaches a success rate of 80%.
- Research Report > New Finding (0.46)
- Research Report > Promising Solution (0.34)
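Compositional policy reuse of the kind the abstract describes can be sketched in the style of Boolean task algebras, which this line of work builds on (the state values and composition rules below are illustrative, not CERLLA's implementation): value functions for base tasks are composed with max/min to handle "A or B" / "A and B" instructions without collecting new environment samples.

```python
# Schematic Boolean-algebra-style composition of toy state-value tables:
# OR-composition takes the pointwise max, AND-composition the pointwise min.

Q_get_key = {"s0": 0.2, "s1": 0.9}    # toy values for base task "get key"
Q_open_box = {"s0": 0.7, "s1": 0.1}   # toy values for base task "open box"

def compose_or(q1, q2):
    """Value table for 'do task 1 or task 2'."""
    return {s: max(q1[s], q2[s]) for s in q1}

def compose_and(q1, q2):
    """Value table for 'do task 1 and task 2'."""
    return {s: min(q1[s], q2[s]) for s in q1}

print(compose_or(Q_get_key, Q_open_box))   # → {'s0': 0.7, 's1': 0.9}
print(compose_and(Q_get_key, Q_open_box))  # → {'s0': 0.2, 's1': 0.1}
```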
The Integer Linear Programming Inference Cookbook
Effective decision-making requires the use of knowledge. This has been a clear and long-standing principle in AI research, as reflected, for example, in the seminal early work on knowledge and AI summarized by Brachman and Levesque (1985), and in the thriving Knowledge Representation and Reasoning and Uncertainty in AI communities. However, the message has been somewhat diluted as data-driven statistical learning has become increasingly pervasive across AI. Nevertheless, the idea that reasoning and learning need to work together (Khardon and Roth, 1996; Roth, 1996) and that knowledge representation is a crucial bridge between them has not been lost. One area where the link between learning, representation, and reasoning has been shown to be essential and has been studied extensively is Natural Language Processing (NLP), and in particular, the area of Structured Output Prediction within NLP. In structured problems, there is a need to assign values to multiple interrelated random variables. Examples include extracting multiple relations among entities in a document, where the two arguments of a relation such as born-in cannot both refer to people, and co-reference resolution, where gender agreement must be maintained when determining that a specific pronoun refers to a given entity. In these, and many other such problems, it is natural to represent knowledge as Boolean functions over propositional variables. These functions express knowledge, for example, of the form "if the relation between two entities is born-in, then its arguments must be a person and a location" (formalized as functions such as x_born-in ⇒ x_person ∧ x_location).
- Research Report > New Finding (0.67)
- Instructional Material > Course Syllabus & Notes (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Logic & Formal Reasoning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (1.00)
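An implication like "born-in(e1, e2) ⇒ person(e1) ∧ location(e2)" can be encoded as linear inequalities over 0/1 variables, e.g. x_rel ≤ x_person and x_rel ≤ x_loc (one standard ILP encoding; the variable names are illustrative). A brute-force check over all assignments confirms the two encodings agree:

```python
from itertools import product

def boolean_rule(rel, person, loc):
    """The logical form: rel implies (person and loc)."""
    return (not rel) or (person and loc)

def ilp_rule(rel, person, loc):
    """The linear form over 0/1 variables: rel <= person, rel <= loc."""
    return rel <= person and rel <= loc

# The two encodings are satisfied by exactly the same 0/1 assignments.
assert all(
    bool(boolean_rule(r, p, l)) == bool(ilp_rule(r, p, l))
    for r, p, l in product([0, 1], repeat=3)
)
print("encodings agree on all 8 assignments")
```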
Marker and source-marker reprogramming of Most Permissive Boolean networks and ensembles with BoNesis
Boolean networks (BNs) are discrete dynamical systems with applications to the modeling of cellular behaviors. In this paper, we demonstrate how the software BoNesis can be employed to exhaustively identify combinations of perturbations which enforce properties on their fixed points and attractors. We consider marker properties, which specify that some components are fixed to a specific value. We study 4 variants of the marker reprogramming problem: the reprogramming of fixed points, of minimal trap spaces, and of fixed points and minimal trap spaces reachable from a given initial configuration with the most permissive update mode. The perturbations consist of fixing a set of components to a fixed value. They can destroy and create new attractors. In each case, we give an upper bound on their theoretical computational complexity, and give an implementation of the resolution using the BoNesis Python framework. Finally, we lift the reprogramming problems to ensembles of BNs, as supported by BoNesis, bringing insight on possible and universal reprogramming strategies. This paper can be executed and modified interactively.
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Health & Medicine > Therapeutic Area > Oncology (0.46)
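The core objects above, fixed points of a perturbed Boolean network, can be illustrated in plain Python on a toy network (this is a generic sketch, not the BoNesis API): a perturbation forces some components to a constant, and we enumerate the fixed points of the resulting network.

```python
from itertools import product

# Toy 3-component Boolean network: each rule maps a state dict to the
# component's next value.
rules = {
    "a": lambda s: s["b"],
    "b": lambda s: s["a"],
    "c": lambda s: s["a"] and not s["b"],
}

def fixed_points(rules, perturbation=None):
    """Enumerate fixed points, with some components optionally forced."""
    perturbation = perturbation or {}
    comps = list(rules)
    fps = []
    for vals in product([False, True], repeat=len(comps)):
        state = dict(zip(comps, vals))
        if any(state[k] != v for k, v in perturbation.items()):
            continue  # skip states inconsistent with the forced components
        update = {k: (perturbation[k] if k in perturbation else rules[k](state))
                  for k in comps}
        if update == state:
            fps.append(state)
    return fps

print(fixed_points(rules))               # unperturbed fixed points
print(fixed_points(rules, {"a": True}))  # fixed points after forcing a := 1
```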
Towards Explainable Meta-Learning for DDoS Detection
Zhou, Qianru, Li, Rongzhen, Xu, Lei, Nallanathan, Arumugam, Yang, Jian, Fu, Anmin
The Internet is the most complex machine humankind has ever built, and defending it from intrusions is even more complex. As new intrusions continually emerge, intrusion detection relies more and more on Artificial Intelligence. Interpretability and transparency of the machine learning model are the foundation of trust in AI-driven intrusion detection results. Current interpretable AI techniques in intrusion detection are heuristic, which is neither accurate nor sufficient. This paper proposes a rigorously interpretable Artificial-Intelligence-driven intrusion detection approach based on the artificial immune system. The rigorous interpretation calculation process for a decision tree model is presented in detail. Prime implicant explanations for benign traffic flows are given in detail as rules for negative selection in the cyber immune system. Experiments are carried out on real-life traffic.
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.68)
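A prime implicant explanation can be sketched by brute force on a tiny classifier (the paper's exact procedure may differ; the feature names here are made up): given an instance, find a smallest subset of its feature values that already forces the decision regardless of the remaining features.

```python
from itertools import combinations

def classify(x):
    # Toy "decision tree": traffic is benign iff small packets and low rate.
    return x["small_pkts"] and x["low_rate"]

def prime_implicant(classify, instance):
    """Smallest subset of the instance's values that forces its prediction."""
    feats = list(instance)
    target = classify(instance)
    for size in range(len(feats) + 1):
        for subset in combinations(feats, size):
            def forced(assigns, rest):
                # Does the prediction hold for every completion of `rest`?
                if not rest:
                    return classify(assigns) == target
                f = rest[0]
                return all(forced({**assigns, f: v}, rest[1:])
                           for v in (False, True))
            fixed = {f: instance[f] for f in subset}
            if forced(fixed, [f for f in feats if f not in subset]):
                return fixed
    return instance

x = {"small_pkts": True, "low_rate": True, "new_flag": False}
print(prime_implicant(classify, x))  # a minimal explanation of "benign"
```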
Boolean Operators in Python
A Boolean expression evaluates to either True or False given a comparison. Boolean values belong to the type bool. Python provides comparison operators for this purpose; for example, == is one such comparison operator.
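A few concrete examples show comparison operators producing bool values, which can then be combined with the Boolean operators and, or, and not:

```python
# Comparisons evaluate to bool; `and`, `or`, `not` combine them.
a, b = 3, 5
print(a == b)            # → False  (equality comparison)
print(a < b)             # → True
print(type(a < b))       # → <class 'bool'>
print(a < b and b < 10)  # → True   (both comparisons hold)
print(not a == b)        # → True
```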
Savile Row Manual
We describe the constraint modelling tool Savile Row, its input language and its main features. Savile Row translates a solver-independent constraint modelling language to the input languages of various solvers, including constraint, SAT, and SMT solvers. After a brief introduction, the manual describes the Essence Prime language, which is the input language of Savile Row. Then we describe the functions of the tool, its main features and options, and how to install and use it.
- Research Report (0.50)
- Instructional Material (0.34)
Multiway Storage Modification Machines
We present a parallel version of Sch\"onhage's Storage Modification Machine, the Multiway Storage Modification Machine (MWSMM). Like the alternative Association Storage Modification Machine of Tromp and van Emde Boas, MWSMMs recognize in polynomial time what Turing Machines recognize in polynomial space. Falling thus into the Second Machine Class, the MWSMM is a parallel machine model conforming to the Parallel Computation Thesis. We illustrate MWSMMs by a simple implementation of Wolfram's String Substitution System.
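A string substitution system of the kind used to illustrate MWSMMs can be sketched in a few lines (one common convention: at each step, scan left to right and apply the first rule that matches; exact conventions vary):

```python
# Minimal sequential string substitution system: rewrite the leftmost
# position at which any rule matches, trying the rules in order there.

def step(s, rules):
    for i in range(len(s)):
        for lhs, rhs in rules:
            if s.startswith(lhs, i):
                return s[:i] + rhs + s[i + len(lhs):]
    return s  # no rule applies

rules = [("AB", "BA"), ("A", "AB")]
s = "A"
for _ in range(4):
    s = step(s, rules)
    print(s)
```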
xRAI: Explainable Representations through AI
Bartelt, Christian, Marton, Sascha, Stuckenschmidt, Heiner
We present xRAI, an approach for extracting symbolic representations of the mathematical functions that a neural network was supposed to learn from the trained network. The approach is based on the idea of training a so-called interpretation network that receives the weights and biases of the trained network as input and outputs a numerical representation of the function the network was supposed to learn, which can be directly translated into a symbolic representation. In this paper, we use Boolean functions of low arity and low-order polynomials as examples; however, xRAI can be applied to any function family efficiently learnable by a neural network. For the case of low-order polynomials, this has been shown by [Andoni et al., 2014]. For each family of functions, we train a neural network called an interpretation network (I-Net). The I-Net receives the weights and biases of a λ-Net as input and determines an approximation of the target function of the trained λ-Net. We train the I-Net offline by systematically training λ-Nets on different functions from the family and using these trained networks as training examples for the I-Net. We show that interpretation nets for different classes of functions can be trained on synthetic data.
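The offline data-generation pipeline for the I-Net can be sketched schematically (a pure-Python stand-in, not the paper's architecture: each "λ-Net" is replaced here by a closed-form least-squares fit of a sampled linear function, and the I-Net training set pairs the fitted parameters with the target function's true coefficients):

```python
import random

random.seed(0)
xs = [i / 10 for i in range(-10, 11)]  # shared sample points

def fit_line(points):
    """Closed-form least squares for y = a*x + b (stand-in for training a λ-Net)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def make_inet_dataset(num_functions=100):
    """Pairs of ("trained λ-Net" parameters, true function coefficients)."""
    dataset = []
    for _ in range(num_functions):
        a = random.uniform(-1, 1)
        b = random.uniform(-1, 1)          # sample a target function
        points = [(x, a * x + b) for x in xs]
        fitted = fit_line(points)          # "train" the λ-Net on it
        dataset.append((fitted, (a, b)))   # I-Net input, I-Net target
    return dataset

data = make_inet_dataset()
print(len(data))  # → 100
```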